Learning diverse dexterous manipulation behaviors with assorted objects remains an open grand challenge. While policy-learning methods offer a powerful avenue for attacking this problem, they require extensive per-task engineering and algorithmic tuning. This paper tries to sidestep these constraints by developing a Pre-Grasp informed Dexterous Manipulation (PGDM) framework that generates diverse dexterous manipulation behaviors without any task-specific reasoning or hyperparameter tuning. At the core of PGDM is a well-known robotics construct, the pre-grasp (i.e., a hand pose that prepares for object interaction). This simple primitive is enough to induce efficient exploration strategies for acquiring complex dexterous manipulation behaviors. To exhaustively verify these claims, we introduce TCDM, a benchmark of 50 diverse manipulation tasks defined over multiple objects and dexterous manipulators. Tasks in TCDM are defined automatically using exemplar object trajectories from various sources (animators, human behaviors, etc.), without any per-task engineering and/or supervision. Our experiments validate that PGDM's exploration strategy, induced by a surprisingly simple ingredient (a single pre-grasp pose), matches the performance of prior methods that require expensive per-task feature/reward engineering, expert supervision, and hyperparameter tuning. For animated visualizations, trained policies, and project code, please see: https://pregrasps.github.io/
translated by 谷歌翻译
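The intuition behind the pre-grasp can be illustrated with a minimal, hypothetical sketch (a toy 1-D world, not the paper's actual environments or policies): undirected exploration started from a pose already near the object makes contact, and hence informative reward signal, far more likely than exploration from a distant neutral pose. All constants below are illustrative assumptions.

```python
import random

OBJECT_POS = 1.0      # toy 1-D object location (assumed)
CONTACT_EPS = 0.05    # hand-object distance that counts as contact (assumed)
PRE_GRASP_POS = 0.95  # hand pose prepared for interaction (assumed)

def rollout(start_pos, steps=20, rng=None):
    """Random-exploration rollout; True if the hand ever contacts the object."""
    rng = rng or random.Random(0)
    pos = start_pos
    for _ in range(steps):
        pos += rng.uniform(-0.1, 0.1)  # undirected exploratory action
        if abs(pos - OBJECT_POS) < CONTACT_EPS:
            return True
    return False

def contact_rate(start_pos, trials=1000):
    """Fraction of random rollouts that ever touch the object."""
    return sum(rollout(start_pos, rng=random.Random(t)) for t in range(trials)) / trials
```

Starting from the pre-grasp pose, a large fraction of rollouts reach contact; starting from the origin, essentially none do, which is why a single pre-grasp can stand in for expensive exploration engineering.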
Semantic segmentation is the computer vision task of assigning each pixel of an image to a class, and it should be performed with both accuracy and efficiency. Most existing deep FCNs require heavy computation, and these power-hungry networks are unsuitable for real-time applications on portable devices. This project analyzes current semantic segmentation models to explore the feasibility of applying them for emergency response during catastrophic events. We compare the performance of real-time semantic segmentation models against non-real-time counterparts on aerial images under adverse settings. Furthermore, we train several models on the Flood-Net dataset, which contains UAV images captured after Hurricane Harvey, and benchmark their performance on special classes such as flooded vs. non-flooded buildings and flooded vs. non-flooded roads. In this project, we developed a real-time UNet-based model and deployed that network on a Jetson AGX Xavier module.
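Benchmarking on class pairs such as flooded vs. non-flooded buildings typically reduces to per-class intersection-over-union on dense label maps. A minimal sketch (the class ids here are hypothetical, not the dataset's actual label map):

```python
import numpy as np

def per_class_iou(pred, target, num_classes):
    """Intersection-over-Union for each class, from dense per-pixel label maps."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        ious.append(inter / union if union > 0 else float("nan"))
    return ious

# Toy 2x4 label maps with assumed ids: 0=background,
# 1=non-flooded building, 2=flooded building.
target = np.array([[0, 1, 1, 2],
                   [0, 1, 2, 2]])
pred   = np.array([[0, 1, 2, 2],
                   [0, 1, 2, 2]])
ious = per_class_iou(pred, target, 3)  # -> IoU of 1.0, 2/3, and 0.75 per class
```

Reporting the flooded and non-flooded classes separately, rather than a single mean IoU, exposes exactly the confusions that matter for emergency response.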
Computing systems are tightly integrated today into our professional, social, and private lives. An important consequence of this growing ubiquity of computing is that it can have significant ethical implications, which computing professionals should take into account. In most real-world scenarios, it is not immediately obvious how particular technical choices during the design and use of computing systems could be viewed from an ethical perspective. This article provides a perspective on the ethical challenges within semiconductor chip design, IoT applications, and the increasing use of artificial intelligence in the design processes, tools, and hardware-software stacks of these systems.
We consider distributed linear bandits where $M$ agents learn collaboratively to minimize the overall cumulative regret incurred by all agents. Information exchange is facilitated by a central server, and both the uplink and downlink communications are carried over channels with fixed capacity, which limits the amount of information that can be transmitted in each use of the channels. We investigate the regret-communication trade-off by (i) establishing information-theoretic lower bounds on the required communications (in terms of bits) for achieving a sublinear regret order, and (ii) developing an efficient algorithm that achieves the minimum sublinear regret order offered by centralized learning while using the minimum order of communications dictated by the information-theoretic lower bounds. For sparse linear bandits, we show that a variant of the proposed algorithm offers a better regret-communication trade-off by leveraging the sparsity of the problem.
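The fixed-capacity channel forces each uplink or downlink message into a fixed number of bits. A generic sketch of such quantization (not the paper's actual coding scheme), mapping a bounded scalar statistic to one of $2^b$ levels:

```python
def quantize(x, b, lo=-1.0, hi=1.0):
    """Map x in [lo, hi] to one of 2**b uniform levels.

    Returns (index, dequantized value); only the b-bit index is transmitted.
    """
    levels = 2 ** b
    x = min(max(x, lo), hi)                             # clip to the known range
    idx = min(int((x - lo) / (hi - lo) * levels), levels - 1)
    value = lo + (idx + 0.5) * (hi - lo) / levels       # bin midpoint
    return idx, value

idx, val = quantize(0.3, b=4)
# The reconstruction error is at most half a bin width, 0.5 * (hi - lo) / 2**b,
# which the regret analysis must absorb without changing the regret order.
```

With $b=4$ bits over $[-1, 1]$, each use of the channel carries 4 bits and the per-message error is bounded by $1/16$.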
The amounts of data that need to be transmitted, processed, and stored by modern deep neural networks have reached truly enormous volumes in the last few years, calling for the invention of new paradigms in both hardware and software development. One of the most promising and rapidly advancing frontiers here is the creation of new numerical formats. In this work we focus on the family of block floating point numerical formats, due to their combination of wide dynamic range, numerical accuracy, and efficient hardware implementation of inner products using simple integer arithmetic. These formats are characterized by a block of mantissas with a shared scale factor. The basic Block Floating Point (BFP) format quantizes the block scales to the nearest powers of two on the right. Its simple modification, Scaled BFP (SBFP), stores the same scales in full precision and thus allows higher accuracy. In this paper, we rigorously study the statistical behavior of both formats. We develop asymptotic bounds on the inner product error in SBFP- and BFP-quantized normally distributed vectors. Next, we refine those asymptotic results to finite-dimensional settings and derive high-dimensional tight bounds for the same errors. Based on the obtained results, we introduce a performance measure for assessing the accuracy of any block format. This measure allows us to determine the optimal parameters, such as the block size, that yield the highest accuracy. In particular, we show that if the precision of the BFP format is fixed at 4 bits, the optimal block size becomes 64. All theoretical derivations are supported by numerical experiments and studies on the weights of publicly available pretrained neural networks.
Recurrent neural networks (RNNs) are used in applications that learn dependencies in data sequences, such as speech recognition, human activity recognition, and anomaly detection. In recent years, newer RNN variants such as GRUs and LSTMs have been used to implement these applications. Since many of these applications are deployed in real-time scenarios, accelerating RNN/LSTM/GRU inference is crucial. In this paper, we propose a novel photonic hardware accelerator called RecLight for accelerating simple RNNs, GRUs, and LSTMs. Simulation results indicate that RecLight achieves 37x lower energy-per-bit and 10% higher throughput compared to the state of the art.
We consider the setting where a master wants to run a distributed stochastic gradient descent (SGD) algorithm on $n$ workers, each having a subset of the data. Distributed SGD may suffer from stragglers, i.e., slow or unresponsive workers who cause delays. One solution studied in the literature is to wait at each iteration for the responses of the fastest $k < n$ workers before updating the model, where $k$ is a fixed parameter. The choice of the value of $k$ presents a trade-off between the runtime (i.e., convergence rate) of SGD and the model error. Towards optimizing this error-runtime trade-off, we investigate distributed SGD with adaptive $k$, i.e., varying $k$ throughout the runtime of the algorithm. We first design an adaptive policy for varying $k$ that optimizes this trade-off based on an upper bound on the error as a function of the wall-clock time that we derive. Then, we propose and implement an algorithm for adaptive distributed SGD that is based on a statistical heuristic. Our results show that the adaptive version of distributed SGD can reach lower error values in less time compared to non-adaptive implementations. Moreover, the results also show that the adaptive version is communication-efficient, in that the amount of communication required between the master and the workers is smaller than that of non-adaptive versions.
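The fastest-$k$ mechanism and an adaptive schedule can be sketched as follows. This is a toy simulation with an exponential straggler model and a hand-picked two-phase schedule for $k$, not the paper's derived policy:

```python
import numpy as np

rng = np.random.default_rng(1)
n_workers, dim = 8, 5
w_star = np.ones(dim)            # optimum of the toy objective f(w) = 0.5*||w - w*||^2
w = np.zeros(dim)                # model held by the master

def worker_gradient(w):
    """Noisy stochastic gradient of the toy objective."""
    return (w - w_star) + 0.1 * rng.standard_normal(dim)

def sgd_step(w, k, lr=0.5):
    """Wait only for the fastest k of n workers; average their gradients."""
    delays = rng.exponential(1.0, n_workers)   # toy straggler model (assumed)
    step_time = np.sort(delays)[k - 1]         # wall-clock time until k-th response
    grad = np.mean([worker_gradient(w) for _ in range(k)], axis=0)
    return w - lr * grad, step_time

total_time = 0.0
for it in range(50):
    # Adaptive schedule: small k early (fast, noisy steps),
    # large k late (slower steps, lower-variance gradients).
    k = 2 if it < 25 else 6
    w, t = sgd_step(w, k)
    total_time += t
```

The wall-clock cost of a step is the $k$-th order statistic of the worker delays, which is exactly why small $k$ buys speed early and large $k$ buys accuracy late.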
Deep neural networks (DNNs) have been widely used and play an important role in computer vision and autonomous navigation. However, these DNNs are computationally complex, and without additional optimization and customization their deployment on resource-constrained platforms is difficult. In this manuscript, we present an overview of DNN architectures and propose methods to reduce their computational complexity, in order to accelerate training and inference speeds and fit them on edge computing platforms with low computational resources.
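One common complexity-reduction technique in this space is magnitude pruning, which zeroes the smallest weights so that sparse kernels can skip the corresponding multiply-accumulates. A minimal sketch (illustrative only; the manuscript surveys several such methods):

```python
import numpy as np

def prune_by_magnitude(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights in a layer."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

rng = np.random.default_rng(0)
layer = rng.standard_normal((64, 64))              # a toy dense layer
pruned = prune_by_magnitude(layer, 0.9)
# ~90% of the weights are now zero; the surviving weights are unchanged,
# so only the layer's sparsity pattern, not its largest weights, is affected.
```

In practice pruning is followed by fine-tuning to recover accuracy, and is often combined with quantization before edge deployment.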
We consider personalized federated learning, in which, in addition to a global objective, each client is also interested in maximizing a personalized local objective. We study this problem under a general continuous action space setting where the objective functions belong to a reproducing kernel Hilbert space. We propose algorithms based on surrogate Gaussian process (GP) models that achieve the optimal regret order (up to polylogarithmic factors). Furthermore, we show that sparse approximations of the GP models significantly reduce the communication cost among the clients.
The plethora of sensors in our commodity devices provides a rich substrate for sensor-fusion-based tracking. Yet, today's solutions are unable to deliver robust and high tracking accuracy across multiple agents in practical, everyday environments, which is core to the future of immersive and collaborative applications. This can be attributed to the limited scope of diversity leveraged by these fusion solutions, preventing them from catering to the multiple dimensions of accuracy, robustness (diverse environmental conditions), and scalability (multiple agents). In this work, we take an important step towards this goal by introducing the notion of dual-layer diversity to the problem of sensor fusion in multi-agent tracking. We demonstrate that the fusion of complementary tracking modalities, passive/relative (e.g., visual odometry) and active/absolute tracking (e.g., infrastructure-assisted RF localization), offers a key first layer of diversity that brings scalability, while the second layer of diversity lies in the methodology of fusion, where we bring together algorithmic (for robustness) and data-driven (for accuracy) approaches. RoVaR is an embodiment of such a dual-layer diversity approach that intelligently attends to cross-modal information using algorithmic and data-driven techniques, jointly sharing the burden of accurately tracking multiple agents in the wild. Extensive evaluations reveal RoVaR's multi-dimensional benefits in terms of tracking accuracy (median error), robustness (in unseen environments), and light weight (running in real time on mobile platforms such as the Jetson Nano/TX2), enabling practical multi-agent immersive applications in everyday environments.
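The first diversity layer, fusing relative/passive odometry with absolute/active RF fixes, can be illustrated with a complementary-filter-style sketch (a toy model with assumed noise parameters, not RoVaR's actual fusion pipeline):

```python
import numpy as np

def fuse_tracks(odom_deltas, rf_fixes, alpha=0.2):
    """Dead-reckon with relative odometry, and softly correct the drifting
    estimate toward each absolute RF fix (complementary-filter style)."""
    est = np.zeros(2)
    track = []
    for delta, fix in zip(odom_deltas, rf_fixes):
        est = est + delta                      # passive/relative: smooth but drifts
        if fix is not None:                    # active/absolute: noisy but driftless
            est = (1 - alpha) * est + alpha * fix
        track.append(est.copy())
    return np.array(track)

# Toy agent moving along +x; odometry has a small constant bias (drift),
# while RF fixes are unbiased but noisy.
rng = np.random.default_rng(0)
steps = 100
true_path = np.cumsum(np.tile([1.0, 0.0], (steps, 1)), axis=0)
odom = np.tile([1.0, 0.0], (steps, 1)) + np.array([0.05, 0.02])
fixes = [true_path[t] + rng.normal(0.0, 0.5, 2) for t in range(steps)]

fused = fuse_tracks(odom, fixes)
odom_only = np.cumsum(odom, axis=0)
err_fused = np.linalg.norm(fused[-1] - true_path[-1])
err_odom = np.linalg.norm(odom_only[-1] - true_path[-1])
```

Odometry alone accumulates unbounded drift, while the absolute fixes bound the error; the complementary modalities cover each other's failure mode, which is the scalability-bearing first layer of diversity in the abstract.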